31 research outputs found

    Adaptive design of experiment via normalizing flows for failure probability estimation

    Failure probability estimation is a crucial task in engineering. In this work we consider the setting in which the underlying computer models are extremely expensive, as often arises in practice, so that reducing the number of calls to the computer model is essential. We formulate the estimation of the failure probability with expensive computer models as a sequential experimental design problem for the limit state (i.e., the failure boundary) and propose a series of efficient adaptive design criteria to solve this design of experiment (DOE). In particular, the proposed method employs a deep neural network (DNN) as a surrogate of the limit state function to reduce the number of calls to the expensive computer experiment. A map from the Gaussian distribution to the posterior approximation of the limit state is learned by normalizing flows to ease the experimental design. Three normalizing-flows-based design criteria are proposed for deciding the design locations under different assumptions about the generalization error. The accuracy and performance of the proposed method are demonstrated by both theory and practical examples. Comment: failure probability, normalizing flows, adaptive design of experiment. arXiv admin note: text overlap with arXiv:1509.0461
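    The Python sketch below illustrates the general flavor of such an adaptive design loop under strong simplifying assumptions: a scikit-learn MLPRegressor stands in for the paper's DNN surrogate, and candidate points closest to the surrogate's predicted failure boundary are acquired in place of the normalizing-flow-based criteria described in the abstract. The limit_state function and all settings are hypothetical.

    ```python
    # Minimal sketch of an adaptive design-of-experiment loop for failure
    # probability estimation. Assumptions: MLPRegressor replaces the paper's DNN
    # surrogate; acquisition by "closest to the predicted boundary" replaces the
    # normalizing-flow criteria; limit_state is a toy stand-in for the expensive model.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def limit_state(x):
        """Hypothetical expensive model g(x); failure corresponds to g(x) < 0."""
        return 3.0 - np.linalg.norm(x, axis=1)

    dim, n_init, n_rounds, batch = 2, 20, 10, 5
    X = rng.standard_normal((n_init, dim))        # initial design in standard-normal space
    y = limit_state(X)

    for _ in range(n_rounds):
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                 random_state=0).fit(X, y)
        cand = rng.standard_normal((5000, dim))   # cheap candidate pool
        score = np.abs(surrogate.predict(cand))   # proximity to the predicted failure boundary
        new_X = cand[np.argsort(score)[:batch]]   # acquire the most boundary-like points
        X = np.vstack([X, new_X])
        y = np.concatenate([y, limit_state(new_X)])  # only these few calls hit the expensive model

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(X, y)  # final refit on all evaluations
    mc = rng.standard_normal((200_000, dim))      # plain Monte Carlo on the cheap surrogate
    p_fail = float(np.mean(surrogate.predict(mc) < 0.0))
    print(f"estimated failure probability: {p_fail:.4f}")
    ```

    The point of the sketch is only the budget structure: the expensive model is evaluated a handful of times per round, while all candidate screening and the final Monte Carlo estimate run on the cheap surrogate.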

    The noncompact Schauder fixed point theorem in random normed modules

    Random normed modules (RN modules) are a random generalization of ordinary normed spaces, usually endowed with two kinds of topologies: the (ε,λ)-topology and the locally L^0-convex topology. The purpose of this paper is to give a noncompact generalization of the classical Schauder fixed point theorem for the development and financial applications of RN modules. Motivated by the randomized version of the classical Bolzano-Weierstrass theorem, we first introduce the two notions of a random sequentially compact set and a random sequentially continuous mapping under the (ε,λ)-topology, and further establish their corresponding characterizations under the locally L^0-convex topology so that we can treat the fixed point problems under the two kinds of topologies in a unified way. We then prove our desired Schauder fixed point theorem: in a σ-stable RN module, every continuous (under either topology) σ-stable mapping T from a random sequentially compact closed L^0-convex subset G to G has a fixed point. The whole idea of the proof is to find an approximate fixed point of T; however, since G is not compact in general, realizing this idea in the random setting forces us to construct the corresponding Schauder projection in a subtle way and carry out countably many decompositions of T, so that we first obtain an approximate fixed point for each decomposition and eventually one for T by the countable concatenation skill. Besides, the new fixed point theorem not only includes as a special case Bharucha-Reid and Mukherjea's famous random version of the classical Schauder fixed point theorem but also implies the corresponding Krasnoselskii fixed point theorem in RN modules. Comment: 37 pages
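    For readability, the main result as stated in the abstract can be condensed as follows; the notation (E for the module, and a base probability space) is an assumption for the restatement, not taken from the paper's text.

    ```latex
    % Condensed restatement of the theorem described in the abstract.
    % Assumed notation: E is the RN module, (\Omega,\mathcal{F},P) a base probability space.
    \begin{theorem}[Noncompact Schauder fixed point theorem]
    Let $E$ be a $\sigma$-stable random normed module over a base probability space
    $(\Omega,\mathcal{F},P)$, endowed with either the $(\varepsilon,\lambda)$-topology
    or the locally $L^{0}$-convex topology. If $G \subset E$ is a random sequentially
    compact, closed, $L^{0}$-convex subset and $T \colon G \to G$ is a continuous
    $\sigma$-stable mapping, then $T$ has a fixed point, i.e.\ there exists
    $x \in G$ with $T(x) = x$.
    \end{theorem}
    ```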

    A Social Platform for Knowledge Gathering and Exploitation, Towards the Deduction of Inter-enterprise Collaborations

    Several standards have been defined for enhancing the efficiency of B2B web-supported collaboration. However, they lack a general semantic representation, which leaves aside the promise of automatically deducing inter-enterprise business processes. To achieve this automatic deduction, this paper presents a social platform that aims at acquiring knowledge from users and linking the acquired knowledge with the knowledge maintained on the platform. Based on this linkage, the platform aims at automatically deducing cross-organizational business processes (i.e., selection of partners and sequencing of their activities) to fulfill any collaboration opportunity.

    Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation

    Conventional data augmentation, realized by performing simple pre-processing operations (e.g., rotation, cropping), has been validated for its advantage in enhancing performance in medical image segmentation. However, the data generated by these conventional augmentation methods are random and sometimes harmful to the subsequent segmentation. In this paper, we develop a novel automatic learning-based data augmentation method for medical image segmentation which models the augmentation task as a trial-and-error procedure using deep reinforcement learning (DRL). In our method, we combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss. Specifically, the best sequential combination of different basic operations is automatically learned by directly maximizing the performance improvement (i.e., the Dice ratio) on the available validation set. We extensively evaluated our method on CT kidney tumor segmentation, and the results validate the promise of the approach. Comment: 5 pages, 3 figures
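    A toy sketch of the trial-and-error idea is given below. It is not the paper's DRL agent: a simple epsilon-greedy bandit searches over short sequences of basic operations, and the expensive "train the segmenter and measure validation Dice" step is replaced by a hypothetical stub (evaluate_dice). The operation set, sequence length, and reward surface are all assumptions.

    ```python
    # Toy illustration of learning an augmentation policy by trial and error.
    # Assumptions: OPS, SEQ_LEN, and evaluate_dice are hypothetical; the real method
    # trains the augmentation and segmentation modules end to end with DRL.
    import itertools
    import numpy as np

    OPS = ["rotate", "crop", "flip", "elastic", "noise"]   # assumed basic operations
    SEQ_LEN = 3
    rng = np.random.default_rng(0)

    def evaluate_dice(op_sequence):
        """Stub: apply the ops, fine-tune the segmenter, return validation Dice."""
        # A fake response surface stands in for the expensive training step.
        bonus = 0.02 * ("rotate" in op_sequence) + 0.03 * ("elastic" in op_sequence)
        return 0.80 + bonus + rng.normal(0, 0.005)

    candidates = list(itertools.permutations(OPS, SEQ_LEN))
    values = np.zeros(len(candidates))     # running mean reward per sequence
    counts = np.zeros(len(candidates))

    for step in range(200):
        if rng.random() < 0.1:                         # epsilon-greedy exploration
            i = int(rng.integers(len(candidates)))
        else:
            i = int(np.argmax(values))
        reward = evaluate_dice(candidates[i])          # Dice on the validation set
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]  # incremental mean update

    best = candidates[int(np.argmax(values))]
    print("best augmentation sequence:", best)
    ```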

    Immobilization of Laccase for Oxidative Coupling of Trans-Resveratrol and Its Derivatives

    Trametes villosa laccase (TVL) was immobilized through physical adsorption on SBA-15 mesoporous silica, and the immobilized TVL was used in the oxidative coupling of trans-resveratrol. The enzyme immobilized on SBA-15 showed higher loading and activity than the free enzyme. The effects of reaction conditions, such as buffer type, pH, temperature, and substrate concentration, were investigated; the screened optimum conditions gave an enzyme activity of up to 10.3 μmol/g·h. Furthermore, the oxidative couplings of derivatives of trans-resveratrol were also catalyzed by the immobilized TVL. The immobilized TVL was recyclable and retained 78% of its initial activity after four reuses.

    Long-term influence of maize stover and its derived biochar on soil structure and organo-mineral complexes in Northeast China

    The influence of biochar on soil structure and aggregate stability has been debated in previous studies. To probe the action of biochar on soil aggregates, a 5-year field experiment was conducted in the brown earth soil of northeastern China. We determined the aggregate size distribution (> 2000 μm, 250–2000 μm, 53–250 μm, and < 53 μm) and the organic carbon (OC) and organo-mineral complex contents both in the topsoil (0–20 cm) and within the soil aggregates. Three treatments were studied: control (basal application of mineral NPK fertilizer), biochar (applied at a rate of 2.625 t ha−1), and stover (maize stover applied at a rate of 7.5 t ha−1); all treatments received the same fertilization. The biochar and stover applications significantly (p < 0.05) decreased the soil bulk and particle densities and enhanced the total porosity. Both amendments significantly (p < 0.05) enhanced the total OC, heavy OC fractions, and organo-mineral complex quantities in the bulk soil as well as in all the studied aggregate fractions. Biochar and stover applications promoted the formation of small macroaggregates. A greater amount of organic matter was contained in the macroaggregates, which led to the formation of more organo-mineral complexes and thereby improved soil aggregate stability. However, the different mechanisms underlying the effects of biochar and stover on organo-mineral complexes need further research. Biochar and stover applications are both effective methods of improving soil structure in Northeast China.